Statistical semantics
Statistical semantics is the study of "how the statistical patterns of human word usage can be used to figure out what people mean, at least to a level sufficient for information access". How can we figure out what words mean simply by looking at patterns of words in huge collections of text? What are the limits of this approach to understanding words?

==History==
The term ''Statistical Semantics'' was first used by Warren Weaver in his well-known paper on machine translation. He argued that word sense disambiguation for machine translation should be based on the co-occurrence frequency of the context words near a given target word. The underlying assumption, that "a word is characterized by the company it keeps", was advocated by J. R. Firth and is known in linguistics as the Distributional Hypothesis. Emile Delavenay defined ''Statistical Semantics'' as the "statistical study of meanings of words and their frequency and order of recurrence". Furnas ''et al.'' (1983) is frequently cited as a foundational contribution to Statistical Semantics. An early success in the field was Latent Semantic Analysis.
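The co-occurrence idea above can be made concrete with a small sketch. The toy corpus, window size, and function names below are illustrative assumptions, not Weaver's or Firth's actual procedure: each word is represented by counts of the words appearing near it, and words that keep similar "company" end up with similar count vectors.

```python
from collections import Counter
from math import sqrt

# Toy corpus (illustrative assumption): a word's meaning is approximated
# by "the company it keeps" -- its nearby context words.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "stocks rose on the market today",
    "the market fell as stocks dropped",
]

WINDOW = 2  # context words within +/-2 positions count as co-occurrences

def cooccurrences(sentences, window=WINDOW):
    """Build, for every word, a Counter of words seen within `window` of it."""
    vectors = {}
    for sent in sentences:
        tokens = sent.split()
        for i, word in enumerate(tokens):
            ctx = vectors.setdefault(word, Counter())
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    ctx[tokens[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

vecs = cooccurrences(corpus)
# "cat" and "dog" share contexts (the, sat, on, chased), so they come out
# far more similar to each other than either is to "stocks".
print(cosine(vecs["cat"], vecs["dog"]))
print(cosine(vecs["cat"], vecs["stocks"]))
```

Latent Semantic Analysis goes a step further than this sketch: it arranges such counts into a term-document matrix and applies a low-rank factorization, so that words can be similar even when they never co-occur directly.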
Source: the free encyclopedia Wikipedia, "Statistical semantics".